

Analysis of sensors for movement analysis

Faundez-Zanuy, Marcos, Faura-Pujol, Anna, Montalvo-Ruiz, Hector, Losada-Fors, Alexia, Genovese, Pablo, Sanz-Cartagena, Pilar

arXiv.org Artificial Intelligence

In this paper we analyze and compare different movement sensors: Microchip gesture ID, Leap Motion, Noitom mocap, and a specially developed sensor for tapping and foot-motion analysis. The main goal is to evaluate the accuracy of the measurements provided by the sensors. This study is relevant, for instance, to tremor/Parkinson's disease analysis as well as to touchless mechanisms for activating and controlling devices. The latter is especially interesting in the COVID-19 scenario: removing the need to touch a surface reduces the risk of contagion.
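As a minimal sketch of one way such an accuracy comparison could be set up, the snippet below compares a sensor's position trace against a higher-precision reference trace via RMSE after resampling both onto a common time base. The traces, sampling rates, and RMSE criterion are illustrative assumptions, not the paper's actual protocol.

```python
# Illustrative sketch (not the paper's actual pipeline): compare a sensor's
# position trace against a reference trace using RMSE on a common time base.
import numpy as np

def resample(t, x, t_common):
    """Linearly interpolate a 1-D trace x(t) onto the common time base."""
    return np.interp(t_common, t, x)

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

rng = np.random.default_rng(0)

# Hypothetical traces: timestamps in seconds, fingertip height in mm.
t_ref = np.linspace(0, 5, 500)
x_ref = np.sin(t_ref) * 40                       # high-rate reference trace
t_leap = np.linspace(0, 5, 300)                  # sensor samples at a lower rate
x_leap = np.sin(t_leap) * 40 + rng.normal(0, 1.5, t_leap.size)  # noisy sensor

t_common = np.linspace(0, 5, 200)
err = rmse(resample(t_ref, x_ref, t_common), resample(t_leap, x_leap, t_common))
print(f"sensor RMSE vs reference: {err:.2f} mm")
```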


Holz, founder of AI art service Midjourney, on future images

#artificialintelligence

Interview: In 2008, David Holz co-founded a hardware peripheral firm called Leap Motion. He ran it until last year, when he left to create Midjourney. Midjourney in its present form is a social network for creating AI-generated art from a text prompt: type a word or phrase at the input prompt and you'll receive an interesting or perhaps wonderful image on screen after about a minute of computation. It's similar in some respects to OpenAI's DALL-E 2: both are the result of large AI models trained on vast numbers of images. But Midjourney has its own distinctive style, as can be seen from this Twitter thread. [Image: Midjourney rendering of the sky and clouds, using the text prompt "All this useless beauty."]


Midjourney founder says 'the world needs more imagination'

#artificialintelligence

In April 2022, OpenAI -- the artificial intelligence (AI) company cofounded by Elon Musk, Sam Altman, Ilya Sutskever, Greg Brockman, Wojciech Zaremba and John Schulman -- debuted DALL-E 2, an AI tool that can create realistic images and art from a description in natural language, like "teddy bears working on new AI research on the moon in the 1980s." In an attempt to take a step toward artificial general intelligence (AGI) by endowing it with the sense of sight, OpenAI created an internet sensation. In the company's words, "DALL-E 2 will empower people to express themselves creatively."


Communicative Learning with Natural Gestures for Embodied Navigation Agents with Human-in-the-Scene

Wu, Qi, Wu, Cheng-Ju, Zhu, Yixin, Joo, Jungseock

arXiv.org Artificial Intelligence

Human-robot collaboration is an essential research topic in artificial intelligence (AI), enabling researchers to devise cognitive AI systems and affording users an intuitive means of interacting with robots. Of note, communication plays a central role. To date, prior studies in embodied agent navigation have only demonstrated that human language facilitates communication through instructions in natural language. Nevertheless, a plethora of other forms of communication is left unexplored. In fact, human communication originated in gestures and is oftentimes delivered through multimodal cues, e.g. "go there" with a pointing gesture. To bridge the gap and fill in the missing dimension of communication in embodied agent navigation, we propose investigating the effects of using gestures as the communicative interface instead of verbal cues. Specifically, we develop a VR-based 3D simulation environment, named Ges-THOR, based on the AI2-THOR platform. In this virtual environment, a human player is placed in the same virtual scene and shepherds the artificial agent using only gestures. The agent is tasked to solve the navigation problem guided by natural gestures with unknown semantics; we do not use any predefined gestures, owing to the diversity and versatile nature of human gestures. We argue that learning the semantics of natural gestures is mutually beneficial to learning the navigation task--learn to communicate and communicate to learn. In a series of experiments, we demonstrate that human gesture cues, even without predefined semantics, improve object-goal navigation for an embodied agent, outperforming various state-of-the-art methods.
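As a rough sketch of how a gesture-conditioned navigation policy could be wired up (the architecture below is an assumption for illustration, not the Ges-THOR model): encode the visual frame and the gesture sequence separately, fuse them by concatenation, and predict a discrete navigation action.

```python
# Sketch of a gesture-conditioned navigation policy (assumed architecture):
# a small CNN encodes the RGB frame, a GRU encodes the gesture skeleton
# sequence, and a linear head maps the fused features to action logits.
import torch
import torch.nn as nn

class GestureNavPolicy(nn.Module):
    def __init__(self, n_actions=6, joint_dim=63, hidden=128):
        # joint_dim=63 assumes 21 hand joints x 3 coordinates per frame.
        super().__init__()
        self.visual = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.gesture = nn.GRU(joint_dim, hidden, batch_first=True)
        self.head = nn.Linear(32 + hidden, n_actions)

    def forward(self, frame, gesture_seq):
        v = self.visual(frame)                # (B, 32)
        _, h = self.gesture(gesture_seq)      # h: (1, B, hidden)
        return self.head(torch.cat([v, h[-1]], dim=1))  # action logits

policy = GestureNavPolicy()
logits = policy(torch.randn(2, 3, 84, 84), torch.randn(2, 20, 63))
print(logits.shape)  # torch.Size([2, 6])
```

In a reinforcement-learning setup, these logits would parameterize the agent's action distribution, so the gesture encoder is trained jointly with the navigation objective, matching the "learn to communicate and communicate to learn" intuition.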


Visual Rendering of Shapes on 2D Display Devices Guided by Hand Gestures

Singla, Abhik, Roy, Partha Pratim, Dogra, Debi Prosad

arXiv.org Machine Learning

The design of touchless user interfaces is gaining popularity in various contexts. Using such interfaces, users can interact with electronic devices even when their hands are dirty or non-conductive. Users with partial physical disabilities can also interact with electronic devices using such systems. Research in this direction has received a major boost because of the emergence of low-cost sensors such as Leap Motion, Kinect or RealSense devices. In this paper, we propose a Leap Motion controller-based methodology to facilitate rendering of 2D and 3D shapes on display devices. The proposed method tracks finger movements while users perform natural gestures within the field of view of the sensor. In the next phase, trajectories are analyzed to extract extended Npen++ features in 3D. These features represent finger movements during the gestures, and they are fed to a unidirectional left-to-right Hidden Markov Model (HMM) for training. A one-to-one mapping between gestures and shapes is proposed. Finally, shapes corresponding to these gestures are rendered on the display using the MuPad interface. We have created a dataset of 5400 samples recorded by 10 volunteers. Our dataset contains 18 geometric and 18 non-geometric shapes such as "circle", "rectangle", "flower", "cone", "sphere" etc. The proposed methodology achieves an accuracy of 92.87% when evaluated using 5-fold cross-validation. Our experiments reveal that the extended 3D features perform better than existing 3D features in the context of shape representation and classification. The method can be used for developing useful HCI applications for smart display devices.
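The left-to-right HMM training step could look roughly like the following sketch using hmmlearn. The extended Npen++ feature extraction is not reproduced here, so `samples` below stands in for extracted per-frame 3D feature sequences, and the state count and feature dimension are arbitrary choices.

```python
# Sketch of training a left-to-right Gaussian HMM for one gesture class.
# In a full system, one such HMM is trained per gesture and a new sample is
# classified by the model with the highest log-likelihood.
import numpy as np
from hmmlearn import hmm

def make_left_to_right_hmm(n_states=5, n_iter=50):
    # init_params/params exclude 's' and 't', so EM leaves our fixed
    # start probabilities and transition topology untouched.
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                            init_params="mc", params="mc", n_iter=n_iter)
    model.startprob_ = np.eye(n_states)[0]   # always start in state 0
    trans = np.zeros((n_states, n_states))
    for i in range(n_states):                # each state may only stay put
        trans[i, i] = 0.5                    # or advance to the next state
        trans[i, min(i + 1, n_states - 1)] += 0.5
    model.transmat_ = trans
    return model

rng = np.random.default_rng(0)
samples = [rng.normal(size=(40, 10)) for _ in range(20)]  # fake feature sequences
X, lengths = np.vstack(samples), [len(s) for s in samples]

model = make_left_to_right_hmm()
model.fit(X, lengths)
print(model.score(samples[0]))  # log-likelihood of one sequence
```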


Static Gesture Recognition using Leap Motion

Toghiani-Rizi, Babak, Lind, Christofer, Svensson, Maria, Windmark, Marcus

arXiv.org Machine Learning

In this report, an automated bartender system was developed for placing orders in a bar using hand gestures. The gesture recognition of the system was developed using machine learning techniques, where the model was trained to classify gestures using collected data. The final model used in the system reached an average accuracy of 95%. The system raised ethical concerns, both in terms of user interaction and of deploying such a system in a real-world scenario, but it could initially work as a complement to a real bartender.
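A minimal sketch of the train-and-evaluate loop for such a static-gesture classifier, using scikit-learn; the feature layout, class count, and SVM choice are assumptions for illustration, not the report's exact pipeline.

```python
# Sketch of a static-gesture classifier (assumed setup): each sample is a
# flat vector of Leap Motion hand measurements, e.g. fingertip positions
# and directions, labeled with one of a few order gestures.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
X = rng.normal(size=(600, 30))       # placeholder feature vectors
y = rng.integers(0, 4, size=600)     # 4 hypothetical order gestures

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
print(f"accuracy: {accuracy_score(y_te, clf.predict(X_te)):.2%}")
```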


PC Makers Bet on Gaze, Gesture, Voice, and Touch

AITopics Original Links

Products that could make it common to control a computer, TV, or something else using eye gaze, gesture, voice, and even facial expression were launched at the Consumer Electronics Show in Las Vegas this week. The technology promises to make computers and other devices easier to use, let devices do new things, and perhaps boost the prospects of companies reliant on PC sales. Industry figures suggest that interest in laptop and desktop computers is waning as consumers' heads are turned by smartphones and tablets. Intel led the charge, using its press briefing Monday to announce a new webcam-like device and supporting software intended to bring gesture, voice control, and facial expression recognition to PCs. "This will be available as a low-cost peripheral this year," said Kirk Skaugen, vice president for Intel's PC client group. "Rest assured that Intel's working to integrate this with all-in-ones and Ultrabooks, too."


Apple's move into 3D sensors has intriguing possibilities

AITopics Original Links

Apple just bought itself a 3D sensor company, a move that has some intriguing possibilities. This past weekend, Apple confirmed its acquisition of PrimeSense, an Israel-based company best known for its work on the original Microsoft Kinect, a gaming accessory that lets you control on-screen action by moving your body. "I think it's very big news," says David Fleet, a computer science professor at the University of Toronto. Fleet studies machine vision systems and cites the success of the Kinect as a "big win," adding "I think many more applications are on the horizon." PrimeSense's 3D sensing system uses an infrared emitter to shoot out beams of invisible infrared light.


Leap Motion will bring your hands into mobile VR

Engadget

Leap Motion has been working on making your interactions in VR as realistic as possible, but until now its technology has only been available for desktop and console systems. The company has expanded its scope to mobile devices with its new Mobile Platform, designed for "untethered, battery-powered virtual and augmented reality devices." It has built a reference system combining its new sensor and platform on top of a Gear VR, which it says it is shipping to headset makers around the world. Leap Motion is also bringing demos of its Interaction Engine (for natural hand gestures) on this portable medium to major VR events this month. To enable hand tracking on such devices, Leap Motion had to make a sensor that performs better while consuming less power.


Leap Motion

#artificialintelligence

Leap Motion is transforming how we interact with technology using the original interface: the human hand. Over the last two years, we've shipped almost half a million motion-tracking controllers to developers and consumers around the world, opening up new possibilities for a platform beyond the screen – from music and gaming to the next generation of VR/AR interfaces. By bringing people and computing devices closer together, a career at Leap Motion offers the opportunity to help bring science fiction to life. Our software is rapidly evolving, and we're looking for a build engineer to help take it to the next level. This would involve managing our substantial continuous integration automation infrastructure, contributing to improving code coverage, and helping us automate as much of our build and testing pipeline as possible.